5 - Artificial Intelligence II [ID:47296]

Okay, so welcome back.

We are currently in the process of working on the basics for building agents in non-observable, non-static, unreliable environments, where we don't know which of the possible world states we're in.

We have looked at the beginnings of probability theory, discrete probability theory, and we've been keeping an eye open for ways we can exploit the structure in the world, so that the full joint probability distribution, which is a very high-dimensional rectangle of values, can actually be condensed down to something with fewer values.

We've looked at a couple of computation rules.

The chain rule was the first: a very simple idea, just an iterated product rule.

For the chain rule, I would like you to remember that the order of the variables actually makes a difference.

Depending on how you order the variables in the chain rule, you'll have different conditional probabilities to think about.

If you order them cleverly, you can do a lot.

If you're dumb about ordering them, you get to practice your computations quite a lot.
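A minimal sketch in Python (my own illustration, not from the lecture) of what the chain rule says: it rebuilds an invented three-variable full joint from P(A), P(B | A) and P(C | A, B); with a different ordering you would instead need P(C), P(B | C) and P(A | B, C).

```python
import itertools

# Hypothetical full joint distribution P(A, B, C); the numbers are invented
# and only need to be non-negative and sum to 1.
joint = {
    (True,  True,  True):  0.10, (True,  True,  False): 0.05,
    (True,  False, True):  0.15, (True,  False, False): 0.10,
    (False, True,  True):  0.05, (False, True,  False): 0.20,
    (False, False, True):  0.05, (False, False, False): 0.30,
}

def p(**fixed):
    """Marginal probability of the partial assignment in `fixed`, e.g. p(A=True)."""
    names = ("A", "B", "C")
    return sum(pr for vals, pr in joint.items()
               if all(vals[names.index(n)] == v for n, v in fixed.items()))

# Chain rule: P(A, B, C) = P(A) * P(B | A) * P(C | A, B)
for a, b, c in itertools.product((True, False), repeat=3):
    chain = (p(A=a)
             * p(A=a, B=b) / p(A=a)               # P(B | A)
             * p(A=a, B=b, C=c) / p(A=a, B=b))    # P(C | A, B)
    assert abs(chain - joint[(a, b, c)]) < 1e-12
print("chain rule reproduces the full joint for every assignment")
```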

That was one thing we looked at.

The other idea is what we call marginalization: if we have a couple of variables we're interested in, the query variables, and some other variables we're not, we can basically sum over all the other variables to get the conditional probability of the query variables.
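A minimal sketch of this (using plausible numbers in the style of the standard textbook dentist table; they may not match the lecture's slides exactly): the hidden variable Catch is summed out of the full joint to answer a query about Cavity given Toothache.

```python
# Hypothetical full joint P(Cavity, Toothache, Catch), tuple order as named.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Marginalization: P(Cavity, Toothache) = sum over Catch of P(Cavity, Toothache, Catch)
def p_cavity_toothache(cavity, toothache):
    return sum(joint[(cavity, toothache, catch)] for catch in (True, False))

# Conditioning on the evidence Toothache = true gives the query result.
p_query    = p_cavity_toothache(True, True)                                    # 0.12
p_evidence = p_cavity_toothache(True, True) + p_cavity_toothache(False, True)  # 0.20
print(p_query / p_evidence)   # P(Cavity=true | Toothache=true) = 0.6
```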

That's the second thing.

The third thing was this magic trick of normalization, where we're basically exploiting the idea that certain things have to add up to one, so we can distill out this alpha factor, which is then just a number, something like five, and that allows us to use the fact that certain things add up to one to get more information than we should really have, or at least that's my feeling about it, which makes it magic.
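The same dentist query, done with the normalization trick instead of computing P(Toothache = true) explicitly (again my own sketch with the same hypothetical numbers): we collect the unnormalized entries for each value of the query variable and rescale them so they sum to one.

```python
# Unnormalized distribution over Cavity with Toothache = true (Catch summed out).
unnormalized = {
    True:  0.108 + 0.012,   # Cavity = true
    False: 0.016 + 0.064,   # Cavity = false
}

# The alpha factor is simply one over the sum of the unnormalized entries.
alpha = 1.0 / sum(unnormalized.values())             # 1 / 0.2 = 5
posterior = {c: alpha * v for c, v in unnormalized.items()}
print(posterior)   # {True: 0.6, False: 0.4} = P(Cavity | Toothache=true)
```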

And finally, we have this idea of Bayes rule, which allows us to switch between the diagnostic

and the causal direction of conditional probabilities.
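A small sketch of Bayes' rule with invented numbers (chosen to be consistent with the hypothetical table above), turning the causal direction P(toothache | cavity) into the diagnostic direction P(cavity | toothache):

```python
p_cavity = 0.2                      # hypothetical prior P(Cavity = true)
p_toothache_given_cavity    = 0.6   # hypothetical causal conditional
p_toothache_given_no_cavity = 0.1   # hypothetical causal conditional

# Marginalize over the cause to get the evidence probability P(Toothache = true).
p_toothache = (p_toothache_given_cavity * p_cavity
               + p_toothache_given_no_cavity * (1 - p_cavity))   # 0.2

# Bayes' rule: P(Cavity | Toothache) = P(Toothache | Cavity) * P(Cavity) / P(Toothache)
p_cavity_given_toothache = p_toothache_given_cavity * p_cavity / p_toothache
print(round(p_cavity_given_toothache, 3))   # 0.6
```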

So those are the things we've looked at, kind of rule-wise.

The last idea was conditional independence.

We would like to have independence of random variables, which we very often don't have.

In particular, in our dentist example, we don't have independence of toothache and catch.

But what we do have is conditional independence: given that we know whether we have a cavity, toothache and catch are independent.

If you have a single cause that gives us multiple effects, and that's something we very often have, then given that we know the cause (do we have a cavity or not?), the effects are actually independent, which means we get a certain multiplicative behavior.

Here it is.

So if we know the outcome of Z, then the probability of Z1 and Z2 given Z is the same as the probability of Z1 given Z times the probability of Z2 given Z: P(Z1, Z2 | Z) = P(Z1 | Z) · P(Z2 | Z).

This actually makes it easier.
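A quick numeric check of that equation with Z = Cavity, Z1 = Toothache, Z2 = Catch, using the same hypothetical dentist numbers as above (my own illustration, not the lecture's slide):

```python
# From the hypothetical table above: P(cavity) = 0.2, P(toothache, cavity) = 0.12,
# P(catch, cavity) = 0.18, P(toothache, catch, cavity) = 0.108.
p_cavity = 0.2
p_toothache_given_cavity = 0.12 / p_cavity            # 0.6
p_catch_given_cavity     = 0.18 / p_cavity            # 0.9

lhs = 0.108 / p_cavity                                 # P(toothache, catch | cavity)
rhs = p_toothache_given_cavity * p_catch_given_cavity  # P(toothache | cavity) * P(catch | cavity)
print(lhs, rhs)   # both ≈ 0.54, as conditional independence demands
```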

What you can see here is relations between things in the full joint probability distribution.

Certain things can be computed from certain other things, which means we have to do less

work and we have to store fewer numbers.

We have to assess fewer numbers.
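A rough back-of-the-envelope sketch (my own numbers, not from the lecture) of how much this saves for one Boolean cause with n Boolean effects:

```python
n = 10
full_joint = 2 ** (n + 1) - 1   # independent entries of the full joint over cause + effects
factored   = 1 + 2 * n          # P(Cause) plus one P(Effect_i | Cause) table per effect
print(full_joint, factored)     # 2047 vs. 21
```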

Conditional independence is something that we are seeing quite often in nature, whereas full independence we are seeing relatively rarely.

This is the effect we are going to use mostly.

Full independence is just a special case of conditional independence.
